
    Fully leakage-resilient signatures revisited: Graceful degradation, noisy leakage, and construction in the bounded-retrieval model

    We construct new leakage-resilient signature schemes. Our schemes remain unforgeable against an adversary leaking arbitrary (yet bounded) information on the entire state of the signer (a property sometimes known as fully leakage resilience), including the random coin tosses of the signing algorithm. The main feature of our constructions is that they offer a graceful degradation of security in situations where standard existential unforgeability is impossible.

    Continuously non-malleable codes with split-state refresh

    Non-malleable codes for the split-state model allow one to encode a message into two parts such that arbitrary independent tampering on each part, followed by decoding of the modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, since otherwise generic attacks become possible. In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which removes the need for the self-destruct mechanism. An additional feature of our security model is that it directly captures security against continual leakage attacks. We give an abstract framework for building such codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption. Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, in a model where the memory is refreshed only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient RAM programs. In comparison to other tamper-resilient compilers, ours has several advantages, among them that, for the first time, it does not rely on the self-destruct feature.
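
    To make the object concrete, here is a minimal Python sketch of the split-state interface (encode, decode, local refresh) that such a code exposes. The XOR-based internals are a toy stand-in and are not themselves non-malleable; the paper's actual instantiation relies on a common reference string and the external Diffie-Hellman assumption.

```python
import secrets

# Toy illustration of the split-state interface only. This XOR sharing is
# NOT non-malleable; it merely demonstrates the API shape and the
# "refresh instead of self-destruct" idea from the abstract.

def _split(x: int, bits: int):
    r = secrets.randbits(bits)
    return [r, x ^ r]

def encode(msg: int, bits: int = 128):
    a = secrets.randbits(bits)
    b = a ^ msg                      # left XOR right == msg
    # each half is itself shared so it can be re-randomized locally
    return _split(a, bits), _split(b, bits)

def refresh(half, bits: int = 128):
    # local refresh: re-randomize one half without touching the other
    return _split(half[0] ^ half[1], bits)

def decode(left, right):
    return left[0] ^ left[1] ^ right[0] ^ right[1]

m = 0xDEADBEEF
L, R = encode(m)
L = refresh(L)                       # no interaction with R needed
assert decode(L, R) == m
```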

    From Known-Plaintext Security to Chosen-Plaintext Security

    We present a new encryption mode for block ciphers. The mode is efficient and is secure against chosen-plaintext attack (CPA) as soon as the underlying block cipher is secure against known-plaintext attack (KPA). We prove that known (and widely used) encryption modes such as CBC mode and counter mode do not have this property. In particular, we prove that CBC mode using a KPA-secure cipher is KPA secure but need not be CPA secure, and we prove that counter mode using a KPA-secure cipher need not even be KPA secure. The analysis is done in a concrete security framework.
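
    The abstract does not spell out the new mode, but the claims about existing modes can be made concrete. The sketch below, using an illustrative placeholder in place of a real cipher, shows the data flow of CBC and counter mode: CBC feeds adversarially influenced blocks into the cipher, while counter mode only ever feeds it structured counter values, which is exactly where a merely KPA-secure cipher may fail.

```python
import secrets
from typing import Callable

BLOCK = 16  # block size in bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(enc: Callable[[bytes], bytes], iv: bytes, blocks: list) -> list:
    # cipher inputs are plaintext XOR previous ciphertext:
    # under CPA the adversary influences what the cipher is applied to
    out, prev = [], iv
    for p in blocks:
        c = enc(xor(p, prev))
        out.append(c)
        prev = c
    return out

def ctr_encrypt(enc: Callable[[bytes], bytes], nonce: int, blocks: list) -> list:
    # cipher inputs are counter values nonce, nonce+1, ... -- never
    # random-looking, so KPA security of enc gives no guarantee here
    return [xor(p, enc((nonce + i).to_bytes(BLOCK, "big")))
            for i, p in enumerate(blocks)]

# toy "cipher" (identity) just to exercise the data flow; not secure
identity = lambda b: b
iv = secrets.token_bytes(BLOCK)
print(cbc_encrypt(identity, iv, [b"A" * BLOCK, b"B" * BLOCK]))
print(ctr_encrypt(identity, 7, [b"A" * BLOCK]))
```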

    Universally Composable Zero-Knowledge Proof of Membership

    Since its introduction, the UC framework of Canetti has received a lot of attention. A contributing factor to its popularity is that it can capture a large number of common cryptographic primitives using ideal functionalities, and thus can be used to give modular proofs for many cryptographic protocols. However, an important member of the cryptographic family has not yet been captured by an ideal functionality, namely the zero-knowledge proof of membership. We give the first formulation of a UC zero-knowledge proof of membership and show that it is closely related to the notions of straight-line zero-knowledge and simulation soundness.

    Lower Bounds for Oblivious Data Structures

    An oblivious data structure is a data structure whose memory access pattern reveals no information about the operations performed on it. Such data structures were introduced by Wang et al. [ACM SIGSAC'14] and are intended for situations where one wishes to store the data structure at an untrusted server. One way to obtain an oblivious data structure is simply to run a classic data structure on an oblivious RAM (ORAM). Until very recently, this resulted in an overhead of ω(lg n) for the most natural setting of parameters. Moreover, a recent lower bound for ORAMs by Larsen and Nielsen [CRYPTO'18] shows that they always incur an overhead of at least Ω(lg n) if used in a black-box manner. To circumvent the ω(lg n) overhead, researchers have instead studied classic data structure problems more directly and have obtained efficient solutions for many such problems, including stacks, queues, deques, priority queues and search trees. However, none of these data structures process operations faster than Θ(lg n), leaving open the question of whether even faster solutions exist. In this paper, we rule out this possibility by proving Ω(lg n) lower bounds for oblivious stacks, queues, deques, priority queues and search trees. (To appear at SODA'19.)
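
    As a baseline for what "oblivious" means, here is a hypothetical trivial construction (not from the paper): a stack in which every operation performs one full read-and-write pass over the array, so the physical access pattern is identical for push and pop. It leaks nothing but costs O(n) per operation; the paper's result says efficient schemes cannot beat Ω(lg n).

```python
# Trivial oblivious stack: every operation touches every cell, so the
# server-visible access pattern is the same full pass regardless of the
# operation -- perfect obliviousness at O(n) overhead per operation.

class ObliviousStack:
    def __init__(self, capacity: int):
        self.cells = [None] * capacity  # conceptually at an untrusted server
        self.top = 0                    # kept in trusted client memory

    def _scan(self, target: int, value):
        # one full read+write pass, identical for push and pop
        result = None
        for i in range(len(self.cells)):
            v = self.cells[i]                              # real read everywhere
            self.cells[i] = value if i == target else v    # dummy write-back elsewhere
            if i == target:
                result = v
        return result

    def push(self, value):
        self._scan(self.top, value)
        self.top += 1

    def pop(self):
        self.top -= 1
        return self._scan(self.top, None)

s = ObliviousStack(8)
s.push("a"); s.push("b")
assert s.pop() == "b"
```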

    On the Number of Synchronous Rounds Required for Byzantine Agreement

    Byzantine agreement is typically considered with respect to either a fully synchronous network or a fully asynchronous one. In the synchronous case, deterministic protocols require t+1 rounds to achieve Byzantine agreement, while randomized protocols still require some large constant number of rounds in expectation. In this paper we examine how many initial synchronous rounds are required for Byzantine agreement if the protocol is allowed to switch to asynchronous operation afterwards. Let n = h + t be the number of parties, where h are honest and t are corrupted. As the main result we show that, in the model with a public-key infrastructure and signatures, d + O(1) deterministic synchronous rounds are sufficient, where d is the minimal integer such that n − d > 3(t − d). This improves on the t+1 rounds needed by deterministic protocols in almost all cases, and on the exact expected number of rounds of randomized protocols in many cases.
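
    The sufficiency condition can be evaluated directly. The small helper below (illustrative, not from the paper) computes the minimal d with n − d > 3(t − d) and compares it against the classical t+1 bound.

```python
def min_sync_rounds_d(n: int, t: int) -> int:
    # smallest integer d >= 0 with n - d > 3*(t - d), i.e. d > (3t - n) / 2
    return max(0, (3 * t - n) // 2 + 1)

# e.g. n = 10 parties, t = 4 corruptions: the classical bound needs
# t+1 = 5 deterministic synchronous rounds, while here d = 2 suffice
# (up to the additive O(1)).
print(min_sync_rounds_d(10, 4))   # -> 2
```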

    Early Stopping for Any Number of Corruptions

    Minimizing the round complexity of Byzantine broadcast is a fundamental question in distributed computing and cryptography. In this work, we present the first early-stopping Byzantine broadcast protocol that tolerates up to t = n−1 malicious corruptions and terminates in O(min{f^2, t+1}) rounds for any execution with f ≀ t actual corruptions. Our protocol is deterministic, adaptively secure, and works assuming a plain public-key infrastructure. Prior early-stopping protocols all either require an honest majority, or tolerate only up to t = (1−Δ)n malicious corruptions while requiring either trusted setup or strong number-theoretic hardness assumptions. As our key contribution, we show a novel tool called a polariser that allows us to transfer certificate-based strategies from the honest-majority setting to settings with a dishonest majority.
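
    A small illustrative calculation (not from the paper, and ignoring hidden constants) shows how much a run with few actual corruptions saves over the worst case.

```python
def broadcast_rounds(f: int, t: int) -> int:
    # early-stopping bound up to hidden constants: min{f^2, t+1}
    return min(f * f, t + 1)

# With t = n-1 = 99 tolerated corruptions, a run with only f = 3 actual
# corruptions stops after ~9 rounds instead of the worst-case 100.
for f in (0, 3, 9, 99):
    print(f, "->", broadcast_rounds(f, 99))
```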

    An Efficient Pseudo-Random Generator with Applications to Public-Key Encryption and Constant-Round Multiparty Computation

    We present a pseudo-random bit generator expanding a uniformly random bit-string r of length k/2, where k is the security parameter, into a pseudo-random bit-string of length 2k − log^2(k) using one modular exponentiation. In contrast to all previous high-expansion-rate pseudo-random bit generators, no hashing is necessary. The security of the generator is proved relative to Paillier’s composite degree residuosity assumption. As a first application of our pseudo-random bit generator, we exploit its efficiency to optimise Paillier’s cryptosystem by a factor of (at least) 2 in both running time and usage of random bits. We then exploit the algebraic properties of the generator to construct an efficient protocol for secure constant-round multiparty function evaluation in the cryptographic setting. This construction gives an improvement in communication complexity over previous protocols on the order of nk^2, where n is the number of participants and k is the security parameter, resulting in a communication complexity of O(nk^2|C|) bits, where C is a Boolean circuit computing the function in question.
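
    The expansion rate is easy to evaluate for concrete parameters. The snippet below assumes the logarithm is base 2 (an assumption on our part) and plugs in k = 1024 as an illustrative security parameter.

```python
import math

# Expansion of the generator: a seed of k/2 bits stretches to
# 2k - log^2(k) bits using a single modular exponentiation.
# Base-2 logarithm is assumed here for illustration.
def output_bits(k: int) -> int:
    return 2 * k - int(math.log2(k)) ** 2

k = 1024                            # illustrative security parameter
seed = k // 2                       # 512 seed bits
print(seed, "->", output_bits(k))   # 512 -> 1948 pseudo-random bits
```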

    LEGO for Two Party Secure Computation

    The first and still most popular solution for secure two-party computation relies on Yao's garbled circuits. Unfortunately, Yao's construction provides security only against passive adversaries. Several constructions (zero-knowledge compilers, cut-and-choose) are known for providing security against active adversaries, but most of them are not efficient enough to be considered practical. In this paper we propose a new approach, called LEGO (Large Efficient Garbled-circuit Optimization), for two-party computation, which allows constructing more efficient protocols secure against active adversaries. The basic idea is the following: Alice constructs and sends Bob a set of garbled NAND gates. A fraction of them is checked by Bob, with Alice giving him the randomness used to construct them. When the check goes through, with overwhelming probability there are very few bad gates among the unchecked ones. Bob permutes these gates and connects them into a Yao circuit, according to a fault-tolerant circuit design which computes the desired function even in the presence of a few random faulty gates. Finally, he evaluates this Yao circuit in the usual way. For large circuits, our protocol offers better performance than any other existing protocol. The protocol is universally composable (UC) in the OT-hybrid model.
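
    The probabilistic core of the cut-and-choose step can be simulated directly. The parameters below (1000 gates, 50 bad, half checked) are illustrative and not taken from the paper; the simulation only shows why a cheating Alice is almost always caught, and why the rare surviving bad gates are few enough for a fault-tolerant circuit to absorb.

```python
import random

def surviving_bad_gates(total: int, bad: int, check_fraction: float):
    # Bob opens a random subset; if any opened gate is bad, Alice is caught.
    gates = [True] * bad + [False] * (total - bad)   # True = bad gate
    random.shuffle(gates)
    k = int(total * check_fraction)
    checked, rest = gates[:k], gates[k:]
    if any(checked):
        return None              # cheating detected, abort
    return sum(rest)             # bad gates that slip into the circuit

# With 1000 gates, 50 of them bad, and half opened, Alice is caught in
# essentially every run (probability of escaping is roughly 2^-50).
results = [surviving_bad_gates(1000, 50, 0.5) for _ in range(1000)]
caught = results.count(None)
print(f"caught in {caught}/1000 runs")
```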

    Cross&Clean: Amortized Garbled Circuits with Constant Overhead

    Garbled circuits (GC) are one of the main tools for secure two-party computation. One of the most promising techniques for efficiently achieving active security in the context of GCs is the so-called cut-and-choose approach, which in the last few years has received many refinements in terms of the number of garbled circuits that need to be constructed, exchanged and evaluated. In this paper we ask a simple question, namely: how many garbled circuits are needed to achieve active security? We propose a novel protocol which achieves active security while using only a constant number of garbled circuits per evaluation in the amortized setting.